
Moreau Envelope





A Organization of the Appendices

Neural Information Processing Systems

In the Appendix, we give proofs of all results from the main text. We say a function $f: \mathbb{R} \times \mathcal{Y} \to \mathbb{R}$ is $M$-Lipschitz if for any $y \in \mathcal{Y}$ and $\hat{y}_1, \hat{y}_2 \in \mathbb{R}$ we have $|f(\hat{y}_1, y) - f(\hat{y}_2, y)| \leq M |\hat{y}_1 - \hat{y}_2|$. We can also define the Moreau envelope of a function $f: \mathbb{R} \times \mathcal{Y} \to \mathbb{R}$ by $f_\lambda(\hat{y}, y) = \min_{u \in \mathbb{R}} \big[ f(u, y) + \frac{1}{2\lambda}(u - \hat{y})^2 \big]$; the proofs of all results in this section can be straightforwardly extended to these settings. The Moreau envelope is a classical object in convex analysis (Boyd et al. 2004; Bauschke, Combettes, et al. 2011; Rockafellar 1970), but it is also useful here: interestingly, there is a similar equivalent characterization for Lipschitz functions as well. Finally, we show that any smooth loss is square-root-Lipschitz, so the class of square-root-Lipschitz losses is more general than the class of smooth losses studied in Srebro et al. 2010.
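As a quick numerical illustration of the Moreau envelope defined above (my own sketch, not code from the paper; the helper names `moreau_envelope`, `absolute_loss`, and `huber` are mine): for the absolute loss, the envelope has the well-known Huber closed form, and a brute-force grid minimization recovers it.

```python
import numpy as np

# My sketch (not code from the paper): the Moreau envelope of a scalar loss,
#   f_lam(yhat, y) = min_u [ f(u, y) + (u - yhat)^2 / (2 * lam) ],
# computed by brute force and checked against the known closed form for the
# absolute loss, which is the Huber loss.

def moreau_envelope(f, yhat, y, lam, grid):
    """Brute-force the inner minimization over a dense grid of candidates u."""
    return np.min(f(grid, y) + (grid - yhat) ** 2 / (2 * lam))

def absolute_loss(u, y):
    return np.abs(u - y)

def huber(r, lam):
    # Moreau envelope of |r|: quadratic near zero, linear in the tails.
    return np.where(np.abs(r) <= lam, r ** 2 / (2 * lam), np.abs(r) - lam / 2)

lam, y = 0.5, 1.0
grid = np.linspace(-5.0, 5.0, 200_001)
for yhat in (-2.0, 0.8, 1.1, 3.0):
    print(moreau_envelope(absolute_loss, yhat, y, lam, grid),
          huber(yhat - y, lam))  # the two columns should agree closely
```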


Uniform Convergence with Square-Root Lipschitz Loss

Neural Information Processing Systems

In linear regression, interpolating the square loss is equivalent to interpolating many other losses (such as the absolute loss) on the training set. Similarly, in the context of linear classification, many works (Soudry et al. 2018; Ji and Telgarsky 2019; Muthukumar et al. 2021) have shown that optimizing different losses, such as the logistic or exponential loss, leads to the same maximum-margin solution, so the resulting predictor again does not depend on the particular choice of loss.
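The first claim admits a one-screen sanity check (my illustration, not from the paper): in the overparameterized regime $n < d$, an interpolator of the square loss has identically zero residuals, so it simultaneously interpolates the absolute loss and any other loss that vanishes at zero residual.

```python
import numpy as np

# Toy check (my illustration, not code from the paper): with fewer samples
# than dimensions, the minimum-norm solution interpolates the data exactly,
# so the square loss, absolute loss, etc. are all zero on the training set.

rng = np.random.default_rng(0)
n, d = 20, 100                      # fewer samples than dimensions
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.linalg.pinv(X) @ y           # minimum-norm interpolating solution
resid = X @ w - y

print(np.max(np.abs(resid)))                        # ~1e-14: exact interpolation
print(np.mean(resid ** 2), np.mean(np.abs(resid)))  # both losses essentially zero
```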






Single-Loop Stochastic Algorithms for Difference of Max-Structured Weakly Convex Functions

Neural Information Processing Systems

In this paper, we study a class of non-smooth non-convex problems of the form $\min_{x}[\max_{y\in\mathcal Y}\phi(x, y) - \max_{z\in\mathcal Z}\psi(x, z)]$, where both $\Phi(x) = \max_{y\in\mathcal Y}\phi(x, y)$ and $\Psi(x)=\max_{z\in\mathcal Z}\psi(x, z)$ are weakly convex functions, and $\phi(x, y)$ and $\psi(x, z)$ are strongly concave in $y$ and $z$, respectively. This class covers two families of problems that have been studied but lack single-loop stochastic algorithms: differences of weakly convex functions and weakly-convex strongly-concave min-max problems. We propose a stochastic Moreau envelope approximate gradient method, dubbed SMAG, the first single-loop algorithm for solving these problems, and establish a state-of-the-art non-asymptotic convergence rate. The key design idea is to compute an approximate gradient of the Moreau envelopes of $\Phi$ and $\Psi$ using only one step of stochastic gradient update for the primal and dual variables. Empirically, we conduct experiments on positive-unlabeled (PU) learning and partial area under the ROC curve (pAUC) optimization with an adversarial fairness regularizer to validate the effectiveness of the proposed algorithm.
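Below is a minimal, deterministic sketch of the single-loop Moreau-envelope idea on a one-dimensional toy problem (my simplification with hand-picked functions and step sizes; the variable names and the use of exact rather than stochastic gradients are my assumptions, not the authors' implementation).

```python
# Toy sketch (my simplified, deterministic rendering of the single-loop
# Moreau-envelope idea; not the authors' code). Problem:
#   minimize  Phi(x) - Psi(x),  Phi(x) = max_y phi(x, y),  Psi(x) = max_z psi(x, z),
# with phi(x, y) = x*y - y^2/2    (so Phi(x) = x^2/2)
# and  psi(x, z) = x*z/2 - z^2/2  (so Psi(x) = x^2/8).
# Since the gradient of the Moreau envelope Phi_lam at x is
# (x - prox_{lam*Phi}(x)) / lam, the key idea is to track both prox points
# with ONE gradient step each per iteration instead of solving them exactly.

lam, eta, tau = 0.5, 0.1, 0.1
x = 3.0
u, v = x, x          # running estimates of prox_{lam*Phi}(x), prox_{lam*Psi}(x)
y, z = 0.0, 0.0      # dual variables for the two inner max problems

for t in range(3000):
    y += tau * (u - y)                 # ascent on phi(u, .): d phi / d y = u - y
    z += tau * (0.5 * v - z)           # ascent on psi(v, .): d psi / d z = v/2 - z
    u -= tau * (y + (u - x) / lam)     # descent on phi(u, y) + (u - x)^2 / (2 lam)
    v -= tau * (0.5 * z + (v - x) / lam)
    g = (x - u) / lam - (x - v) / lam  # approx. gradient of Phi_lam(x) - Psi_lam(x)
    x -= eta * g

print(x)  # objective is (1/2 - 1/8) x^2, so x should end up close to 0
```

The point of the sketch is the last two lines of the loop: the approximate Moreau-envelope gradient is assembled from the cheaply tracked prox points rather than from fully solved inner problems, which is what makes the method single-loop.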